It is commonly believed that in transfer learning, including more pre-training data translates into better performance. However, recent evidence suggests that removing data from the source dataset can actually help as well. In this work, we take a closer look at the role of the source dataset in transfer learning and present a framework for probing its impact on downstream performance. Our framework gives rise to new capabilities, such as pinpointing transfer learning brittleness and detecting pathologies such as data leakage and the presence of misleading examples in the source dataset. In particular, we demonstrate that removing detrimental datapoints identified by our framework improves transfer learning performance from ImageNet on a variety of target tasks. Code is available at https://github.com/madrylab/data-transfer
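As a rough illustration of the kind of counterfactual analysis such a framework enables, the sketch below (a simplification, not the released code; all data and sizes are placeholder assumptions) scores each source class by regressing downstream accuracy on which source classes were kept across pretraining runs:

```python
# A rough sketch (not the authors' framework): score each source class by
# regressing downstream accuracy on which source classes were included
# across many pretraining runs. All data below are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

def estimate_source_influence(inclusion_masks, target_accuracies):
    """inclusion_masks: (n_runs, n_source_classes) 0/1 matrix of kept classes.
    target_accuracies: (n_runs,) downstream accuracy after transfer."""
    reg = LinearRegression().fit(inclusion_masks, target_accuracies)
    return reg.coef_  # negative coefficients flag potentially harmful classes

masks = np.random.randint(0, 2, size=(200, 1000))   # hypothetical pretraining runs
accs = np.random.rand(200)                          # hypothetical target accuracies
influence = estimate_source_influence(masks, accs)
harmful_classes = np.argsort(influence)[:50]        # candidates to remove before re-pretraining
```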
Using transfer learning to adapt a pre-trained "source model" to a downstream "target task" can dramatically increase performance with seemingly no downside. In this work, we demonstrate that there can exist a downside after all: bias transfer, or the tendency for biases of the source model to persist even after the model is adapted to the target classes. Through a combination of synthetic and natural experiments, we show that bias transfer (a) arises in realistic settings (such as when pre-training on ImageNet or other standard datasets) and (b) can occur even when the target dataset is explicitly de-biased. As transfer-learned models are increasingly deployed in the real world, our work highlights the importance of understanding the limitations of pre-trained source models. Code is available at https://github.com/madrylab/bias-transfer
Existing methods for isolating hard subpopulations and spurious correlations in datasets frequently require human intervention. This can make these methods labor-intensive and dataset-specific. To address these shortcomings, we present a scalable method for automatically distilling a model's failure modes. Specifically, we harness linear classifiers to identify consistent error patterns, which in turn induce a natural representation of these failure modes as directions within the feature space. We demonstrate that this framework allows us to discover and automatically caption challenging subpopulations within the training dataset, and to intervene to improve the model's performance on these subpopulations. Code is available at https://github.com/madrylab/failure-directions
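A minimal sketch of the core idea, assuming access to penultimate-layer embeddings and per-example correctness labels (this is an illustration, not the released implementation):

```python
# Minimal sketch: distill a failure mode as a direction in feature space by
# fitting a linear classifier that separates correct from incorrect predictions.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler

def failure_direction(features, is_correct):
    """features: (n, d) penultimate-layer embeddings; is_correct: (n,) bool."""
    X = StandardScaler().fit_transform(features)
    svm = LinearSVC(C=0.1).fit(X, is_correct.astype(int))
    w = svm.coef_[0]
    return w / np.linalg.norm(w), svm.decision_function(X)

feats = np.random.randn(1000, 512)            # placeholder embeddings
correct = np.random.rand(1000) > 0.2          # placeholder correctness labels
direction, scores = failure_direction(feats, correct)
# Examples with the lowest scores lie furthest on the "error" side of the
# direction and form a candidate hard subpopulation to inspect or upweight.
hard_subpopulation = np.argsort(scores)[:100]
```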
Missingness, or the absence of input features, is a concept fundamental to many model debugging tools. However, in computer vision, pixels cannot simply be removed from an image. One thus tends to resort to heuristics such as blacking out pixels, which may in turn introduce bias into the debugging process. We study such biases and, in particular, show how transformer-based architectures enable a more natural implementation of missingness, which side-steps these issues and improves the reliability of model debugging in practice. Our code is available at https://github.com/madrylab/missingness
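A toy illustration of the token-level route, using a small patch-embedding plus transformer-encoder stand-in for a ViT (placeholder sizes, not the paper's models): masked patches are simply dropped from the token sequence rather than filled in with a placeholder value.

```python
# Toy sketch: implement "missingness" by deleting patch tokens before the
# transformer, instead of blacking out pixels, so no filler value is ever seen.
import torch
import torch.nn as nn

patch, dim = 16, 192
to_tokens = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)   # patch embedding
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    num_layers=2,
)

def forward_with_missing(image, keep_mask):
    """image: (1, 3, 224, 224); keep_mask: (n_patches,) bool of tokens to keep."""
    tokens = to_tokens(image).flatten(2).transpose(1, 2)   # (1, n_patches, dim)
    tokens = tokens[:, keep_mask, :]                        # drop "missing" patches
    return encoder(tokens).mean(dim=1)                      # pooled representation

img = torch.randn(1, 3, 224, 224)
mask = torch.rand((224 // patch) ** 2) > 0.3                # keep roughly 70% of patches
rep = forward_with_missing(img, mask)
```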
To improve model generalization, model designers often restrict, implicitly or explicitly, the features that their models use. In this work, we explore the design space of leveraging such feature priors by viewing them as distinct perspectives on the data. Specifically, we find that models trained with diverse sets of feature priors have less overlapping failure modes and can thus be combined more effectively. Moreover, we demonstrate that jointly training such models on additional (unlabeled) data allows them to correct each other's mistakes, which in turn leads to better generalization and resilience to spurious correlations. Code is available at https://github.com/madrylab/copriors
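An agreement-based co-training round gives a rough feel for the mechanism; the sketch below uses two off-the-shelf scikit-learn classifiers as stand-ins for models with different feature priors and synthetic placeholder data (the paper's exact self-training protocol may differ):

```python
# Rough sketch: pseudo-label unlabeled data only where two models with
# different inductive biases agree, then retrain both on the expanded set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

def cotrain_round(model_a, model_b, X, y, X_unlabeled):
    pa, pb = model_a.predict(X_unlabeled), model_b.predict(X_unlabeled)
    agree = pa == pb                                  # keep labels both models endorse
    X_new = np.concatenate([X, X_unlabeled[agree]])
    y_new = np.concatenate([y, pa[agree]])
    model_a.fit(X_new, y_new)                         # each model absorbs labels the
    model_b.fit(X_new, y_new)                         # other helped validate
    return model_a, model_b

X = np.random.randn(200, 20); y = (X[:, 0] > 0).astype(int)   # placeholder labeled data
X_u = np.random.randn(1000, 20)                               # placeholder unlabeled data
m_a = LogisticRegression().fit(X, y)
m_b = RandomForestClassifier(n_estimators=50).fit(X, y)
m_a, m_b = cotrain_round(m_a, m_b, X, y, X_u)
```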
Multiple studies have focused on predicting the prospective popularity of an online document as a whole, without paying attention to the contributions of its individual parts. We introduce the task of proactively forecasting popularities of sentences within online news documents solely utilizing their natural language content. We model sentence-specific popularity forecasting as a sequence regression task. For training our models, we curate InfoPop, the first dataset containing popularity labels for over 1.7 million sentences from over 50,000 online news documents. To the best of our knowledge, this is the first dataset automatically created using streams of incoming search engine queries to generate sentence-level popularity annotations. We propose a novel transfer learning approach involving sentence salience prediction as an auxiliary task. Our proposed technique coupled with a BERT-based neural model exceeds nDCG values of 0.8 for proactive sentence-specific popularity forecasting. Notably, our study presents a non-trivial takeaway: though popularity and salience are different concepts, transfer learning from salience prediction enhances popularity forecasting. We release InfoPop and make our code publicly available: https://github.com/sayarghoshroy/InfoPopularity
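A minimal sketch of the kind of encoder-plus-regression-head setup such a sequence regression task suggests (an assumed architecture, not the released model); the same head could first be fit to salience targets and then fine-tuned on popularity labels:

```python
# Sketch of a BERT encoder with a scalar regression head for per-sentence scores.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SentenceRegressor(nn.Module):
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, **batch):
        cls = self.encoder(**batch).last_hidden_state[:, 0]   # [CLS] embedding
        return self.head(cls).squeeze(-1)                     # one score per sentence

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SentenceRegressor()
batch = tok(["Officials confirmed the merger on Friday."],
            return_tensors="pt", padding=True, truncation=True)
score = model(**batch)   # train first on salience targets, then on popularity labels
```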
Speech systems are sensitive to accent variations. This is especially challenging in the Indian context, with an abundance of languages but a dearth of linguistic studies characterising pronunciation variations. The growing number of L2 English speakers in India reinforces the need to study accents and L1-L2 interactions. We investigate the accents of Indian English (IE) speakers and report in detail our observations, both specific and common to all regions. In particular, we observe the phonemic variations and phonotactics occurring in the speakers' native languages and apply this to their English pronunciations. We demonstrate the influence of 18 Indian languages on IE by comparing the native language pronunciations with IE pronunciations obtained jointly from existing literature studies and phonetically annotated speech of 80 speakers. Consequently, we are able to validate the intuitions of Indian language influences on IE pronunciations by justifying pronunciation rules from the perspective of Indian language phonology. We obtain a comprehensive description in terms of universal and region-specific characteristics of IE, which facilitates accent conversion and adaptation of existing ASR and TTS systems to different Indian accents.
In molecular research, simulation \& design of molecules are key areas with significant implications for drug development, material science, and other fields. Current classical computational power is inadequate for simulating anything beyond small molecules, let alone protein chains hundreds of peptides long. These experiments are therefore done physically in wet labs, but they are time-consuming, it is impossible to examine every molecule given the size of the search space, and tens of billions of dollars are spent on such research every year. Molecule simulation \& design has lately advanced significantly through machine learning models; a fresh perspective on the problem of chemical synthesis is provided by deep generative models for graph-structured data. By optimising differentiable models that produce molecular graphs directly, it is feasible to avoid costly search techniques in the discrete and huge space of chemical structures. However, these models also suffer from computational limitations when dimensions become large and consume large amounts of resources. Quantum generative machine learning has in recent years shown empirical results promising significant advantages over classical counterparts.
Developing and least developed countries face the dire challenge of ensuring that each child in their country receives required doses of vaccination, adequate nutrition and proper medication. International agencies such as UNICEF, WHO and WFP, among other organizations, strive to find innovative solutions to determine which children have received these benefits and which have not. Biometric recognition systems have been sought out to help solve this problem. To that end, this report establishes a baseline accuracy of a commercial contactless palmprint recognition system that may be deployed for recognizing children in the age group of one to five years old. On a database of contactless palmprint images of one thousand unique palms from 500 children, we establish SOTA authentication accuracy of 90.85% @ FAR of 0.01%, rank-1 identification accuracy of 99.0% (closed set), and FPIR=0.01 @ FNIR=0.3 for open-set identification using PalmMobile SDK from Armatura.
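For reference, metrics of this kind can be computed from genuine/impostor similarity scores in the standard way; the sketch below uses placeholder scores and is not tied to the PalmMobile SDK:

```python
# Standard biometric evaluation sketch: threshold at a target FAR for
# verification, and rank-1 accuracy for closed-set identification.
import numpy as np

def tar_at_far(genuine, impostor, target_far=1e-4):
    thr = np.quantile(impostor, 1.0 - target_far)     # threshold giving FAR ~ 0.01%
    return (genuine >= thr).mean(), thr

def rank1_accuracy(score_matrix, gallery_labels, probe_labels):
    """score_matrix: (n_probes, n_gallery) similarities, closed-set search."""
    best = np.asarray(gallery_labels)[score_matrix.argmax(axis=1)]
    return (best == np.asarray(probe_labels)).mean()

genuine = np.random.beta(8, 2, 5000)      # placeholder genuine (same-palm) scores
impostor = np.random.beta(2, 8, 50000)    # placeholder impostor scores
tar, thr = tar_at_far(genuine, impostor)
```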
Selective classification involves identifying the subset of test samples that a model can classify with high accuracy, and is important for applications such as automated medical diagnosis. We argue that this capability of identifying uncertain samples is valuable for training classifiers as well, with the aim of building more accurate classifiers. We unify these dual roles by training a single auxiliary meta-network to output an importance weight as a function of the instance. This measure is used at train time to reweight training data, and at test time to rank test instances for selective classification. A second, key component of our proposal is the meta-objective of minimizing dropout variance (the variance of classifier output when subjected to random weight dropout) for training the meta-network. We train the classifier together with its meta-network using a nested objective of minimizing classifier loss on training data and meta-loss on a separate meta-training dataset. We outperform current state-of-the-art on selective classification by substantial margins--for instance, up to 1.9% AUC and 2% accuracy on a real-world diabetic retinopathy dataset. Finally, our meta-learning framework extends naturally to unsupervised domain adaptation, given our unsupervised variance minimization meta-objective. We show cumulative absolute gains of 3.4% / 3.3% accuracy and AUC over the other baselines in domain shift settings on the Retinopathy dataset using unsupervised domain adaptation.
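The two key ingredients, an instance-wise importance weight from a meta-network and a dropout-variance estimate from repeated stochastic forward passes, can be sketched as follows (assumed shapes and architectures, not the paper's exact training loop):

```python
# Sketch of the two ingredients: a meta-network mapping an instance to an
# importance weight, and dropout variance from repeated stochastic passes.
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Dropout(0.5), nn.Linear(128, 10))
meta_net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

def dropout_variance(x, passes=8):
    classifier.train()                                   # keep dropout stochastic
    probs = torch.stack([classifier(x).softmax(-1) for _ in range(passes)])
    return probs.var(dim=0).sum(dim=-1)                  # per-example output variance

x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
w = meta_net(x).squeeze(-1)                              # instance importance weights
loss = (w * F.cross_entropy(classifier(x), y, reduction="none")).mean()
var = dropout_variance(x)   # a meta-objective would push this down; at test time,
                            # w (or low variance) ranks instances for selective prediction
```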